Search Results
Search for: All records
Total Resources: 5
- Suya, F.; Mahloujifar, S.; Suri, A.; Evans, D.; Tian, Y. (Proceedings of the 38th International Conference on Machine Learning)
- Carlini, N.; Deng, S.; Garg, S.; Jha, S.; Mahloujifar, S.; Mahmoody, M.; Thakurta, A.; Tramèr, F. (IEEE)
- Etesami, O.; Mahloujifar, S.; Mahmoody, M. (ACM)
- Garg, S.; Jha, S.; Mahloujifar, S.; Mahmoody, M. (International Conference on Algorithmic Learning Theory)
  Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural nets (DNNs) seem to be susceptible to small imperceptible changes over test instances. However, the line of work in provable robustness, so far, has been focused on information-theoretic robustness, ruling out even the existence of any adversarial examples. In this work, we study whether there is hope to benefit from the algorithmic nature of an attacker that searches for adversarial examples, and ask whether there is any learning task for which it is possible to design classifiers that are only robust against polynomial-time adversaries. Indeed, numerous cryptographic tasks (e.g., encryption of long messages) can only be secure against computationally bounded adversaries, and are indeed impossible for computationally unbounded attackers. Thus, it is natural to ask if the same strategy could help robust learning. We show that the computational limitation of attackers can indeed be useful in robust learning by demonstrating the possibility of a classifier for some learning task for which computational and information-theoretic adversaries of bounded perturbations have very different power. Namely, while computationally unbounded adversaries can attack successfully and find adversarial examples with small perturbation, polynomial-time adversaries are unable to do so unless they can break standard cryptographic hardness assumptions. Our results, therefore, indicate that perhaps a similar approach to cryptography (relying on computational hardness) holds promise for achieving computationally robust machine learning. On the reverse direction, we also show that the existence of such a learning task in which computational robustness beats information-theoretic robustness requires computational hardness by implying (average-case) hardness of NP.
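The abstract above turns on an asymmetry between polynomial-time and computationally unbounded attackers. As a minimal sketch of that idea (an illustrative toy assuming an HMAC-based labeling rule, not the paper's actual construction), consider a classifier that labels an instance 1 exactly when it carries a valid authentication tag: moving a 0-labeled point into the 1-labeled region requires forging a tag, which is infeasible in polynomial time without the secret key but trivial for an unbounded adversary that can brute-force it. The names `classify` and `tag` here are hypothetical.

```python
import hashlib
import hmac
import os

# Toy sketch (an assumption, not the paper's construction): the learner
# holds a secret key, and an instance (message, tag) is labeled 1 exactly
# when the tag verifies under that key.
KEY = os.urandom(32)  # secret key; an unbounded adversary could brute-force it

def tag(message: bytes) -> bytes:
    # Stand-in for the cryptographic structure embedded in each instance.
    return hmac.new(KEY, message, hashlib.sha256).digest()

def classify(message: bytes, candidate_tag: bytes) -> int:
    # Label 1 iff the tag verifies; otherwise 0.
    return 1 if hmac.compare_digest(tag(message), candidate_tag) else 0

if __name__ == "__main__":
    msg = b"a correctly labeled instance"
    t = tag(msg)
    print(classify(msg, t))         # 1: valid (message, tag) pair
    # Perturbing the message invalidates the tag; restoring label 1 would
    # require forging a fresh valid tag, which is infeasible in polynomial
    # time under standard cryptographic assumptions.
    print(classify(msg + b"!", t))  # 0: tag no longer verifies
```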